Training Set Parallelism in Pahra Architecture

Authors

  • Liberios VOKOROKOS
  • Norbert ÁDÁM
  • Anton BALÁŽ
Abstract

Multilayered feed-forward neural networks trained with the back-propagation algorithm are among the most popular "on-line" artificial neural networks. These networks exhibit strong inherent parallelism owing to their large number of simple computational elements, so it is natural to implement them on a parallel computer architecture. The Parallel Hybrid Ring Architecture (PAHRA) described in this article provides a flexible platform for simulating multilayered feed-forward neural networks trained with the back-propagation algorithm. The computational model of the architecture, bound to a modified error back-propagation algorithm, makes it possible to describe the formal elements of a parallel implementation of a multilayered feed-forward neural network. It also provides a mathematical tool for performance verification, which is used in simulation experiments with multilayered feed-forward networks on a specific hardware platform.
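To make the training scheme concrete, here is a minimal, hedged sketch of on-line back-propagation for a small feed-forward network (a 2-2-1 topology trained on XOR). It is a generic pure-Python illustration, not the PAHRA implementation described in the article, and all names in it are illustrative:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class MLP:
    """Minimal 2-2-1 feed-forward network, on-line back-propagation."""

    def __init__(self):
        # Small random weights; index 2 of each row is the bias weight.
        self.w_h = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(2)]
        self.w_o = [random.uniform(-0.5, 0.5) for _ in range(3)]

    def forward(self, x):
        self.h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in self.w_h]
        self.y = sigmoid(self.w_o[0] * self.h[0] + self.w_o[1] * self.h[1] + self.w_o[2])
        return self.y

    def train_step(self, x, t, lr=0.5):
        """One on-line update: weights change after every single pattern."""
        y = self.forward(x)
        d_o = (y - t) * y * (1.0 - y)  # output-layer delta (sigmoid, squared error)
        d_h = [d_o * self.w_o[j] * self.h[j] * (1.0 - self.h[j]) for j in range(2)]
        for j in range(2):             # output weights and bias
            self.w_o[j] -= lr * d_o * self.h[j]
        self.w_o[2] -= lr * d_o
        for j in range(2):             # hidden weights and biases
            for i in range(2):
                self.w_h[j][i] -= lr * d_h[j] * x[i]
            self.w_h[j][2] -= lr * d_h[j]
        return 0.5 * (y - t) ** 2

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR patterns
net = MLP()
epoch_losses = [sum(net.train_step(x, t) for x, t in data) for _ in range(2000)]
```

The on-line character, a weight update after each individual pattern, is what makes naive parallelization of this algorithm non-trivial and motivates dedicated architectures such as PAHRA.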

Similar resources

Modeling of Feedforward Neural Network in PAHRA Architecture

Multilayered feedforward neural networks are among the most popular neural networks; they represent the standard configuration of biologically inspired mathematical models of a simplified neural system. These networks are massively parallel systems with a large number of simple processing elements, so it is natural to try to implement such systems on parallel computer archi...


Parallel Algorithms for the Training Process of a Neural Network-Based System

This paper addresses the problem of developing efficient parallel algorithms for the training procedure of a neural network-based Fingerprint Image Comparison (FIC) system. The target architecture is assumed to be a coarse-grain distributed-memory parallel architecture. Two types of parallelism, node parallelism and training set parallelism (TSP), are investigated. Theoretical analysis and exper...
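As a hedged sketch of the training-set-parallelism idea mentioned in this abstract (not the paper's actual algorithm), each worker can compute the gradient over its own shard of the training set, after which the partial gradients are summed into a single batch update. A toy linear unit stands in for the network here, and the workers are simulated sequentially:

```python
def shard_gradient(w, b, shard):
    """Partial gradient (dw, db) of the squared error over one shard."""
    dw = db = 0.0
    for x, t in shard:
        err = (w * x + b) - t
        dw += err * x
        db += err
    return dw, db

def tsp_step(w, b, dataset, n_workers, lr=0.005):
    """One TSP update: shard the training set, sum the partial gradients."""
    shards = [dataset[k::n_workers] for k in range(n_workers)]
    # In a real TSP system each shard_gradient call would run on its own
    # processor; here the workers simply run one after another.
    grads = [shard_gradient(w, b, s) for s in shards]
    dw = sum(g[0] for g in grads)
    db = sum(g[1] for g in grads)
    return w - lr * dw, b - lr * db

data = [(float(x), 2.0 * x + 1.0) for x in range(10)]  # targets from t = 2x + 1
w, b = 0.0, 0.0
for _ in range(2000):
    w, b = tsp_step(w, b, data, n_workers=4)
```

Because the shard gradients are simply summed, a TSP update is numerically equivalent to a full-batch gradient step, which is what makes this form of parallelism attractive for distributed-memory machines.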


DAPHNE: Data Parallelism Neural Network Simulator

In this paper we describe the guidelines of Daphne, a parallel simulator for supervised recurrent neural networks trained by backpropagation through time. The simulator has a modular structure, based on a parallel training kernel running on the CM-2 Connection Machine. The training kernel is written in CM Fortran in order to exploit some advantages of the slicewise execution model. The other mod...


A Parallel Implementation of Backpropagation Neural Network on MasPar MP-1

In this paper, we explore the parallel implementation of the backpropagation algorithm with and without hidden layers on the MasPar MP-1. This implementation is based on a SIMD architecture and uses a backpropagation model. Our implementation uses weight batching rather than the on-line weight updating used by most serial and parallel implementations of backpropagation. This method results...
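The contrast between weight batching and on-line updating that this abstract draws can be shown on a toy one-weight model (a hedged sketch under simplified assumptions, not the MasPar implementation):

```python
def online_epoch(w, data, lr):
    """On-line mode: the weight is adjusted after every training pattern."""
    for x, t in data:
        w -= lr * ((w * x) - t) * x  # gradient of 0.5 * (w*x - t)**2
    return w

def batch_epoch(w, data, lr):
    """Batch mode: gradients accumulate over the epoch, then one update."""
    g = sum(((w * x) - t) * x for x, t in data)
    return w - lr * g

data = [(float(x), 2.0 * x) for x in range(1, 6)]  # targets from t = 2x
w_on = w_ba = 0.0
for _ in range(100):
    w_on = online_epoch(w_on, data, lr=0.01)
    w_ba = batch_epoch(w_ba, data, lr=0.01)
# on this toy problem both modes converge to w = 2
```

Batching suits SIMD hardware because every pattern in the epoch can be processed with identical instructions before any weight changes, at the cost of a convergence behaviour that differs from on-line updating.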


Towards Transparent Parallelization of Connectionist Systems

Much work has been done in the area of parallel simulation of connectionist systems. However, parallel implementation issues for artificial neural networks are usually discussed in general terms, while the actual parallel programs implement specific network models and are written in programming languages such as C or C++. This paper deals with the transparent parallelization of neural networks. ...




Publication date: 2007